Reflections on AI Programming Practices: OOP and Over-Compatibility Issues
AI Software Engineering
👤 AI developers, programmers, technical managers, and tech enthusiasts interested in AI programming practices
Based on records from February 3, 2026, this article reflects on problems in AI programming practice. The author argues that even SOTA models like Opus 4.5 are poorly suited to writing object-oriented code, because AI lacks deep understanding of, and modeling ability for, business domains. AI can repay technical debt through well-defined processes, but OOP and over-compatibility remain major challenges: over-compatibility leads to bloated code and cognitive collapse. Programming is essentially a cognitive-science problem of handling unique business needs. The article closes by emphasizing that cognitive development is a spiral: writing code and running experiments build the cognition needed to write better code.
- ✨ AI is unsuitable for writing object-oriented programming code due to a lack of business modeling capabilities
- ✨ Even SOTA models like Opus 4.5 have this limitation
- ✨ AI can repay technical debt through specific processes
- ✨ Over-compatibility leads to bloated code and cognitive collapse
- ✨ Programming is essentially a cognitive science problem that requires handling unique business needs
📅 2026-02-03 · 364 words · ~2 min read
Application of AI Autonomy and Scientific View Alignment in RFC Design
AI Software Engineering
👤 Software engineers, AI researchers, technical managers, and those interested in human-machine collaboration and agile development
This article discusses the importance of AI autonomy in software engineering, particularly in RFC (Request for Comments) design. The author points out that the core of AI autonomy is the alignment of scientific views, meaning AI needs to understand and follow human scientific concepts and methodologies, such as the Occam's Razor principle, to avoid over-engineering and complexity. The article suggests using an adversarial generation architecture, where a review AI questions the design choices of a generation AI, supported by fact constraints. These facts must be third-party verifiable, potentially through experimental code validation. The ultimate goal is to achieve efficient AI autonomy, reduce human intervention costs, and promote agile development models.
- ✨ The core of AI autonomy lies in scientific view alignment, requiring adherence to human scientific concepts
- ✨ Apply Occam's Razor principle to simplify RFC design and avoid unnecessary complexity
- ✨ Adopt adversarial generation architecture, where a review AI questions the design choices of a generation AI
- ✨ Design choices should be based on fact constraints, with facts being third-party verifiable
- ✨ Validate facts through experimental code, referencing scientific methods
📅 2026-01-29 · 592 words · ~3 min read
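The adversarial-review loop summarized above can be sketched in miniature. The functions below are illustrative stubs standing in for the generation-AI and review-AI calls, not the article's actual implementation; the convergence condition (no fact-backed objections remain) is my reading of "fact constraints."

```python
# Sketch of an adversarial RFC review loop: a generator drafts a design,
# a reviewer raises objections, and only objections backed by third-party
# verifiable facts force a revision. All names here are illustrative.

def generate_design(requirement, accepted_objections):
    # Stand-in for a generation-AI call; records each accepted objection
    # as a revision of the design.
    return {"requirement": requirement, "revisions": list(accepted_objections)}

def review_design(design, facts):
    # Stand-in for a review-AI call: object only when a verifiable fact
    # has not yet been reflected in the design.
    return [f for f in facts if f not in design["revisions"]]

def align(requirement, facts, max_rounds=3):
    accepted = []
    design = generate_design(requirement, accepted)
    for _ in range(max_rounds):
        objections = review_design(design, facts)
        if not objections:           # converged: no fact-backed objections left
            return design
        accepted.extend(objections)  # Occam's razor: revise only what facts demand
        design = generate_design(requirement, accepted)
    return design

design = align("cache layer RFC", facts=["benchmark shows LRU beats LFU here"])
```

The key property is that the reviewer cannot force revisions by taste alone: every objection must name a fact, which keeps the loop from inflating the design.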
Analysis of Programmers' Future and Software Demand Growth Trends in the AI Era
AI Software Engineering
👤 Programmers, technology professionals, individuals interested in AI and future trends
Starting from the author's 4 AM routine one day in 2026, this article discusses the impact of AI on programmers' careers. The author argues that the notion of programmer unemployment is too simplistic; the key question is whether demand grows. The article identifies three growth points for future software demand: rising personalized needs, decentralization trends (especially in traffic-distribution systems), and making everything programmable (IoT, VR/AR, and more). It concludes that the future is an era of taste, in which individuals need to cultivate taste and influence, and products will return to taste-driven development.
- ✨ The notion of programmer unemployment under AI impact must consider demand-side growth
- ✨ Personalized needs will significantly increase software demand
- ✨ Decentralization trends bring new product forms and demand for traffic distribution systems
- ✨ Programmable everything extends front-end development to physical devices and new fields
- ✨ The future is an era of taste, where individuals need to enhance taste and influence
📅 2026-01-28 · 909 words · ~5 min read
Analysis and Improvement Plan for Agent Performance in Translation Tasks
AI Software Engineering
👤 AI developers, natural language processing researchers, technical personnel interested in Agent and LLM technologies
This article analyzes why Agents underperform compared to one-shot LLMs in translation tasks, including issues such as high token usage, decreased translation quality, and YAML Frontmatter format errors. The author argues that Agent design is better suited for multi-step reasoning and decision-making tasks, and their context management strategies prevent effective utilization of information for translation. The article also mentions that Agents may enter infinite loops when translating low-resource languages. To address these problems, the author proposes two improvement plans: using an Agents/Sub-Agents framework to decompose translation tasks or assembling low-level one-shot LLM APIs via Skills. The author prefers the first approach and discusses OpenCode's support for complex Agent calls. Finally, the article reviews the update logs for CZON versions 0.5.0 to 0.5.2, including integration with OpenCode, network issue fixes, and rollback of Agent translation features.
- ✨ Agents underperform compared to one-shot LLMs in translation tasks
- ✨ Agents use 10 times more tokens than LLMs
- ✨ Translation quality decreases, with YAML Frontmatter format errors
- ✨ Agent design is better suited for multi-step reasoning and decision-making tasks
- ✨ Context management strategies prevent effective utilization of information
📅 2026-01-23 · 424 words · ~2 min read
Reflections on AI Development Experience: Limitations and Improvement Directions of LLMs from Building CZONE
AI Software Engineering
👤 AI developers, LLM researchers, technology enthusiasts, and individuals interested in the application of AI in software development.
This article documents the author's experience on January 19, 2026, building the online version of CZON (CZONE) from scratch with OpenCode and MiniMax M2.1. AI was fast at technology selection, scaffolding, and feature design, but showed a weak grasp of details when handling GitHub REST API permissions, in particular failing to recognize the special permission requirements for the .workflows directory. The author notes that LLMs suffer from scattered attention and weak reasoning in debugging mode, and suggests introducing a 'lab mode' for controlled experimental validation. OpenCode also lacks browser-manipulation capabilities, so debugging relied on manual log inspection; the author recommends integrating an end-to-end testing framework such as Cypress or Playwright. Finally, AI's development pace is too fast and lacks architectural layering and quality assurance, which the author likens to 'floodwater,' stressing the need for correct concepts, abstraction, and implementation. The article closes with a plumber-fixing-a-leak analogy: AI development needs systematic solutions to root problems, not temporary fixes.
- ✨ AI shows insufficient detail understanding in GitHub API permission handling, especially for special permissions in the .workflows directory
- ✨ LLMs suffer from attention dispersion and weak reasoning in debugging mode; introducing lab mode for controlled experiments is recommended
- ✨ OpenCode lacks browser manipulation capabilities, making debugging reliant on manual inspection; integrating end-to-end testing frameworks is suggested
- ✨ AI development pace is too fast, lacking architectural layering and quality assurance, requiring more systematic development methods
- ✨ The author metaphorically describes AI as a 'brain in a vat' and 'floodwater,' emphasizing the importance of closed-loop thinking and energy allocation
📅 2026-01-19 · 767 words · ~4 min read
Thoughts on AI Agent Module-Level Software Engineering Architecture Design
AI Software Engineering
👤 Software engineers, AI developers, technical personnel interested in automated programming and human-machine collaboration
This article documents the author's thoughts on January 12, 2026, regarding the application of AI Agents in module-level software engineering. The author proposes a human-machine collaborative architecture, with key points including using git worktree to manage code repositories, invoking AI Agents (such as Claude Code) via CLI and managing sessions, obtaining Agent completion notifications and conversation history for transparency. The author plans to implement an automated script that assigns each task to an independent Agent session and coordinates workflows through a scheduler. The article emphasizes the advantages of using Agents over directly calling LLM APIs, as Agents can handle underlying complexities (such as exploring code repositories, invoking system commands, context management), avoiding reinventing the wheel. The author intends to first implement a simplified version to validate the concept.
- ✨ Design a module-level human-machine collaborative software engineering architecture
- ✨ Use git worktree to manage code repositories and setup scripts
- ✨ Invoke AI Agents (such as Claude Code) via CLI to start sessions
- ✨ Obtain AI Agent completion notifications and conversation history for transparency
- ✨ Implement automated scripts to assign independent sessions to Agents
📅 2026-01-12 · 345 words · ~2 min read
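The worktree-plus-session flow described above might be scripted roughly as follows. This is a minimal sketch of the planned automation, not the author's implementation: the `claude -p <prompt>` invocation is an assumption about the agent CLI, and the scheduler here is deliberately sequential where a real one would parallelize and collect completion notifications.

```python
# One git worktree and one agent CLI session per task, coordinated
# by a trivial scheduler. Command construction is separated from
# execution so it can be inspected or dry-run.
import subprocess

def worktree_cmd(repo, task):
    # `git worktree add <path> -b <branch>`: an isolated checkout per task
    return ["git", "-C", repo, "worktree", "add", f"../{task}", "-b", task]

def agent_cmd(prompt):
    # Hypothetical agent invocation; real flags depend on the tool used
    return ["claude", "-p", prompt]

def schedule(repo, tasks, run=subprocess.run):
    # tasks: {task_name: prompt}. Each task gets its own worktree and session.
    for task, prompt in tasks.items():
        run(worktree_cmd(repo, task), check=True)
        run(agent_cmd(prompt), check=True)
```

Injecting `run` keeps the scheduler testable without touching git, which matches the article's emphasis on transparency: the exact commands sent to each Agent session can be logged or reviewed first.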
Observability and Engineering Methods for LLM-Generated Code
AI Software Engineering
👤 Software development engineers, AI application developers, technical managers, and technical personnel interested in LLMs and observability
This article documents a discussion between the author and Hobo on using LLM-generated code in production. Key points: LLM-generated code cannot go straight to production and must be backed by rigorous testing and observability; observability requires intrusive instrumentation, resource isolation, and alerting, with a recommendation to embed alert rules in the code itself. The author and Hobo disagree on the relative importance of LLM intelligence versus engineering methods: the author holds that engineering methods (e.g., prompt chains, testing processes) matter more at the current stage, while Hobo stresses the fundamental role of model intelligence; the two perspectives complement each other and both benefit teams.
- ✨ LLM-generated code cannot be directly used in production environments due to insufficient reliability
- ✨ Observability (e.g., instrumentation, alert rules) is crucial for ensuring long-term service stability
- ✨ Observability requires intrusive implementation and should be combined with resource isolation
- ✨ Alert rules should be embedded in the code to improve collaboration between development and operations
- ✨ Engineering methods (e.g., testing processes) offer greater value for LLM applications at the current stage
📅 2026-01-11 · 892 words · ~4 min read
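One way to read "embed alert rules into the code" is a decorator that registers a rule right next to the function it guards, so the alerting system imports the registry instead of maintaining rules in a separate config. This is a minimal sketch under that reading; the metric name and threshold are invented for illustration.

```python
# Alert rule declared at the function it protects: ops exports
# ALERT_RULES at deploy time, and the wrapper doubles as intrusive
# latency instrumentation.
import time

ALERT_RULES = []  # collected by the alerting system at deploy time

def alert(metric, threshold_ms):
    def decorator(fn):
        ALERT_RULES.append({"metric": metric, "threshold_ms": threshold_ms})
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > threshold_ms:
                print(f"ALERT {metric}: {elapsed_ms:.1f}ms > {threshold_ms}ms")
            return result
        return wrapper
    return decorator

@alert(metric="translate_latency", threshold_ms=200)
def translate(text):
    return text.upper()  # stand-in for an LLM call
```

Keeping the rule and the code in one place is exactly the dev/ops collaboration point above: a change to the function and a change to its alert threshold land in the same diff.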
Reflections on AI Programming Practice: Avoiding OOP and Over-Compatibility
AI Software Engineering
👤 Software developers, AI technology enthusiasts, programming beginners, engineers concerned with code quality and maintainability
This article documents the author's failed experience with AI programming (Vibe Coding), finding that AI-generated object-oriented code is of poor quality and structurally messy, leading to technical debt explosion. The author analyzes reasons including AI's insufficient design capability for OOP paradigms, lack of architectural guidance, and excessive backward compatibility. Key recommendations are proposed: avoid using object-oriented programming and shift to procedural and functional programming; guide AI to understand Occam's Razor principle to reduce code bloat. These measures aim to enhance the quality and maintainability of AI-generated code.
- ✨ AI-generated object-oriented code is of poor quality, structurally messy, leading to technical debt explosion
- ✨ AI has insufficient design capability for OOP paradigms and lacks business domain modeling
- ✨ AI lacks architectural guidance, adopts lazy strategies, resulting in bloated code
- ✨ Excessive backward compatibility increases code complexity and maintenance costs
- ✨ Recommend avoiding OOP and shifting to procedural and functional programming
📅 2026-01-07 · 693 words · ~4 min read
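The recommendation to prefer procedural and functional style over OOP can be illustrated with a toy pipeline (the example is mine, not the article's): small pure functions over plain data, which give an AI far less room to invent a sprawling class hierarchy.

```python
# The same report logic as plain functions over dicts, instead of
# Parser/Record/ReportBuilder classes an AI tends to bloat. Each step
# is independently testable.
from functools import reduce

def parse(line):
    name, value = line.split(",")
    return {"name": name, "value": int(value)}

def total(records):
    return reduce(lambda acc, r: acc + r["value"], records, 0)

lines = ["a,1", "b,2", "c,3"]
records = [parse(line) for line in lines]
print(total(records))
```

Each function has one visible input and output, so an Occam's-razor-style review ("does this need to exist?") is easy to apply; with an OOP design the same question has to be asked of every class, method, and inheritance edge.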
Module-Level Human-Machine Collaborative Software Engineering Architecture Design
AI Software Engineering
👤 Software engineers, AI researchers, technical managers, and practitioners focused on human-machine collaboration and automated software development.
This article addresses the poor quality, unclear boundaries, and slow pace of existing AI Agents when implementing code modules by proposing a module-level human-machine collaborative software engineering architecture. The architecture produces a Protocol Spec through rapid intent alignment, then generates implementation, test, and benchmark specifications in parallel, and safeguards implementation quality through multi-level arbitration mechanisms. Core design principles include layered collaboration, specialized division of labor, and separation of concerns, with explicit acceptance criteria (unit tests pass, no performance regression) that build trust and reduce the human urge to micromanage. The article also discusses open issues such as improving Protocol Spec quality and avoiding arbitration loops, and envisions replacing human supervision with a higher-level AI.
- ✨ Existing AI Agents have issues with poor quality, unclear boundaries, and slow speed in code module implementation
- ✨ Proposes a module-level human-machine collaborative architecture that generates Protocol Spec through rapid intent alignment
- ✨ The architecture adopts layered collaboration, generating the Implementation Spec, Test Spec, and Benchmark Spec in parallel
- ✨ Ensures implementation quality through multi-level arbitration mechanisms, reducing human intervention
- ✨ Defines clear acceptance criteria: unit test passing and no performance degradation, to establish trust mechanisms
📅 2026-01-05 · 1,384 words · ~7 min read
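The acceptance criteria above (unit tests pass, no performance regression) reduce to a small gate function. The 5% benchmark tolerance below is an illustrative assumption, not a number from the article.

```python
# Acceptance gate for a candidate module implementation: every unit
# test must pass, and the benchmark must not regress beyond a tolerance
# relative to the baseline.
def accept(test_results, baseline_ms, candidate_ms, tolerance=0.05):
    tests_pass = all(test_results)
    no_regression = candidate_ms <= baseline_ms * (1 + tolerance)
    return tests_pass and no_regression

ok = accept(test_results=[True, True], baseline_ms=100.0, candidate_ms=103.0)
```

A gate this mechanical is what lets the arbitration layer, rather than a human, decide whether an Agent's implementation is accepted: the criteria are objective, so trust does not depend on reviewing the code by hand.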